Results 1 - 16 of 16
1.
J Comput Neurosci ; 52(2): 125-131, 2024 May.
Article in English | MEDLINE | ID: mdl-38470534

ABSTRACT

Long-term potentiation (LTP) is a synaptic mechanism involved in learning and memory. Experiments have shown that dendritic sodium spikes (Na-dSpikes) are required for LTP in the distal apical dendrites of CA1 pyramidal cells. On the other hand, LTP in perisomatic dendrites can be induced by synaptic input patterns that can be both subthreshold and suprathreshold for Na-dSpikes. It is unclear whether these results can be explained by one unifying plasticity mechanism. Here, we show in biophysically and morphologically realistic compartmental models of the CA1 pyramidal cell that these forms of LTP can be fully accounted for by a simple plasticity rule. We call it the voltage-based Event-Timing-Dependent Plasticity (ETDP) rule. The presynaptic event is the presynaptic spike or release of glutamate. The postsynaptic event is the local depolarization that exceeds a certain plasticity threshold. Our model reproduced the experimentally observed LTP in a variety of protocols, including local pharmacological inhibition of dendritic spikes by tetrodotoxin (TTX). In summary, we have provided a validation of the voltage-based ETDP, suggesting that this simple plasticity rule can be used to model even complex spatiotemporal patterns of long-term synaptic plasticity in neuronal dendrites.
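
A minimal sketch of how such a voltage-based event-timing-dependent plasticity rule could be implemented (Python; the threshold, pairing window, and learning rate are illustrative assumptions, not the paper's fitted parameters):

    # Hypothetical constants; the published ETDP rule may use different values.
    V_THRESH = -30.0   # mV, local depolarization threshold for the postsynaptic event
    WINDOW = 20.0      # ms, pairing window after a presynaptic event
    ETA = 0.01         # learning rate

    def etdp_update(w, t_pre, v_local, t):
        """Potentiate the synaptic weight w when the local dendritic voltage
        exceeds the plasticity threshold within WINDOW ms of a presynaptic
        event (spike / glutamate release) at time t_pre."""
        post_event = v_local > V_THRESH
        if post_event and 0.0 <= t - t_pre <= WINDOW:
            w += ETA * (1.0 - w)   # soft-bounded potentiation
        return w

A rule of this form is indifferent to whether the suprathreshold depolarization comes from a Na-dSpike or from summed subthreshold input, which is what lets a single mechanism cover both experimental regimes described in the abstract.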


Subject(s)
Action Potentials , CA1 Region, Hippocampal , Dendrites , Long-Term Potentiation , Models, Neurological , Pyramidal Cells , Dendrites/physiology , Long-Term Potentiation/physiology , Pyramidal Cells/physiology , Animals , CA1 Region, Hippocampal/physiology , CA1 Region, Hippocampal/cytology , Action Potentials/physiology , Neuronal Plasticity/physiology , Tetrodotoxin/pharmacology , Computer Simulation
2.
Sci Rep ; 11(1): 7615, 2021 04 07.
Article in English | MEDLINE | ID: mdl-33828151

ABSTRACT

Modeling long-term neuronal dynamics may require running long-lasting simulations. Such simulations are computationally expensive, so it is advantageous to use simplified models that sufficiently reproduce real neuronal properties. Reducing the complexity of the neuronal dendritic tree is one option. We have therefore developed a new reduced-morphology model of the rat CA1 pyramidal cell that retains the major dendritic branch classes. To validate the model against experimental data, we used HippoUnit, a recently established standardized test suite for CA1 pyramidal cell models. HippoUnit allowed us to systematically evaluate the somatic and dendritic properties of the model and compare them to models publicly available in the ModelDB database. Our model reproduced (1) somatic spiking properties, (2) somatic depolarization block, (3) EPSP attenuation, (4) action potential backpropagation, and (5) synaptic integration at oblique dendrites of CA1 neurons. In these tests, the model achieved higher biological accuracy than the other tested models. We conclude that, owing to its realistic biophysics and low morphological complexity, our model captures key physiological features of CA1 pyramidal neurons while reducing computational time. The validated reduced-morphology model can thus be used for computationally demanding simulations as a substitute for more complex models.


Subject(s)
CA1 Region, Hippocampal/pathology , CA1 Region, Hippocampal/physiology , Dendrites/physiology , Action Potentials/physiology , Animals , Computer Simulation , Databases, Factual , Dendrites/pathology , Hippocampus/physiology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/metabolism , Pyramidal Cells/physiology , Rats , Synapses/physiology , Synaptic Transmission/physiology
3.
PLoS Comput Biol ; 11(11): e1004588, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26544038

ABSTRACT

Long-term potentiation (LTP) and long-term depression (LTD) are widely accepted to be synaptic mechanisms involved in learning and memory. It remains uncertain, however, which particular activity rules are utilized by hippocampal neurons to induce LTP and LTD in behaving animals. Recent experiments in the dentate gyrus of freely moving rats revealed an unexpected pattern of LTP and LTD from high-frequency perforant path stimulation. While 400 Hz theta-burst stimulation (400-TBS) and 400 Hz delta-burst stimulation (400-DBS) elicited substantial LTP of the tetanized medial path input and, concurrently, LTD of the non-tetanized lateral path input, 100 Hz theta-burst stimulation (100-TBS, a normally efficient LTP protocol for in vitro preparations) produced only weak LTP and concurrent LTD. Here we show in a biophysically realistic compartmental granule cell model that this pattern of results can be accounted for by a voltage-based spike-timing-dependent plasticity (STDP) rule combined with a relatively fast Bienenstock-Cooper-Munro (BCM)-like homeostatic metaplasticity rule, all on a background of ongoing spontaneous activity in the input fibers. Our results suggest that, at least for dentate granule cells, the interplay of STDP-BCM plasticity rules and ongoing pre- and postsynaptic background activity determines not only the degree of input-specific LTP elicited by various plasticity-inducing protocols, but also the degree of associated LTD in neighboring non-tetanized inputs, as generated by the ongoing constitutive activity at these synapses.
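
In generic textbook form (not necessarily the exact equations of the paper), a rate-level caricature of a plasticity rule gated by a BCM-like sliding threshold can be written in LaTeX as:

    \frac{dw}{dt} = \eta\,u\,v\,(v - \theta_M),
    \qquad
    \frac{d\theta_M}{dt} = \frac{1}{\tau_\theta}\bigl(v^2 - \theta_M\bigr)

where u is presynaptic activity, v is postsynaptic activity, and \theta_M is the sliding modification threshold. A "relatively fast" metaplasticity corresponds to a small \tau_\theta, so that \theta_M quickly tracks recent postsynaptic activity; strong tetanization raises \theta_M, pushing the ongoing background activity at non-tetanized inputs below threshold and thereby producing the concurrent LTD the abstract describes.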


Subject(s)
Dentate Gyrus/physiology , Long-Term Potentiation/physiology , Long-Term Synaptic Depression/physiology , Models, Neurological , Neuronal Plasticity/physiology , Action Potentials/physiology , Animals , Computational Biology , Rats , Synapses/physiology
4.
Front Mol Neurosci ; 8: 42, 2015.
Article in English | MEDLINE | ID: mdl-26300724

ABSTRACT

The long-lasting enhancement of synaptic effectiveness known as long-term potentiation (LTP) is considered to be the cellular basis of long-term memory. LTP elicits changes at the cellular and molecular level, including temporally specific alterations in gene networks. LTP can be seen as a biological process in which a transient signal sets a new homeostatic state that is "remembered" by cellular regulatory systems. Previously, we have shown that early growth response (Egr) transcription factors are of fundamental importance to gene networks recruited early after LTP induction. From a systems perspective, we hypothesized that these networks will show less stable architecture, while networks recruited later will exhibit increased stability, being more directly related to LTP consolidation. Using random Boolean network (RBN) simulations we found that the network derived at 24 h was markedly more stable than those derived at 20 min or 5 h post-LTP. This temporal effect on the vulnerability of the networks is mirrored by what is known about the vulnerability of LTP and memory itself. Differential gene co-expression analysis further highlighted the importance of the Egr family and found a rapid enrichment in connectivity at 20 min, followed by a systematic decrease, providing a potential explanation for the down-regulation of gene expression at 24 h documented in our preceding studies. We also found that the architecture exhibited by a control and the 24 h LTP co-expression networks fit well to a scale-free distribution, known to be robust against perturbations. By contrast the 20 min and 5 h networks showed more truncated distributions. These results suggest that a new homeostatic state is achieved 24 h post-LTP. Together, these data present an integrated view of the genomic response following LTP induction by which the stability of the networks regulated at different times parallel the properties observed at the synapse.
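
A minimal sketch of the kind of random Boolean network stability probe used in such analyses (Python; the network size, in-degree, and single-bit perturbation measure are illustrative assumptions, not the study's settings):

    import random
    from itertools import product

    def make_rbn(n, k):
        """Each node gets k random inputs and a random Boolean update rule."""
        inputs = [random.sample(range(n), k) for _ in range(n)]
        rules = [{bits: random.randint(0, 1) for bits in product((0, 1), repeat=k)}
                 for _ in range(n)]
        return inputs, rules

    def step(state, inputs, rules):
        return [rules[i][tuple(state[j] for j in inputs[i])]
                for i in range(len(state))]

    def perturbation_spread(n=100, k=2, steps=50):
        """Flip one node and track the normalized Hamming distance between the
        two trajectories; distances that stay small indicate a stable network."""
        inputs, rules = make_rbn(n, k)
        a = [random.randint(0, 1) for _ in range(n)]
        b = a[:]
        b[0] ^= 1                                   # single-bit perturbation
        for _ in range(steps):
            a, b = step(a, inputs, rules), step(b, inputs, rules)
        return sum(x != y for x, y in zip(a, b)) / n

Applied to networks wired from the gene-expression data at each time point, a smaller perturbation spread for the 24 h network than for the 20 min or 5 h networks is exactly the stability ordering the abstract reports.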

5.
Article in English | MEDLINE | ID: mdl-25698965

ABSTRACT

Computational models of metaplasticity have usually focused on the modeling of single synapses (Shouval et al., 2002). In this paper we study the effect of metaplasticity on network behavior. Our guiding assumption is that the primary purpose of metaplasticity is to regulate synaptic plasticity, increasing it when input is low and decreasing it when input is high; a toy version of this assumption is sketched below. For our experiments we adopt a model of metaplasticity that demonstrably has this effect for a single synapse; our primary interest is in how metaplasticity thus defined affects network-level phenomena. We focus on a network-level phenomenon called polychronicity, which has a potential role in representation and memory. A network with polychronicity can produce non-synchronous but precisely timed sequences of neural firing events, arising from strongly connected groups of neurons called polychronous neural groups (Izhikevich et al., 2004). Polychronous groups (PNGs) develop readily when spiking networks are exposed to repeated spatio-temporal stimuli under the influence of spike-timing-dependent plasticity (STDP), but they are sensitive to changes in the synaptic weight distribution. Using a technique we have recently developed, called Response Fingerprinting, we show that PNGs formed in the presence of metaplasticity are significantly larger than those formed without it. We propose a potential mechanism for this enhancement that links spike latency, an inherent property of integrator-type neurons, to an increased tolerance of PNG neurons to jitter in their inputs.
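
A toy version of the guiding assumption above (Python; the low-pass filter and the 1/(1+x) scaling are invented for illustration, whereas the paper adopts a specific published single-synapse model):

    def plastic_step(w, pre, post, input_trace, tau=100.0, eta0=0.01, dt=1.0):
        """Hebbian update whose learning rate is inversely scaled by a running
        average of presynaptic input (a simple metaplasticity stand-in)."""
        input_trace += (dt / tau) * (pre - input_trace)  # low-pass filter of input
        eta = eta0 / (1.0 + input_trace)                 # high input -> less plasticity
        w += eta * pre * post
        return w, input_trace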

6.
Front Aging Neurosci ; 6: 301, 2014.
Article in English | MEDLINE | ID: mdl-25426065

ABSTRACT

The posterior-anterior shift in aging (PASA) is a commonly observed phenomenon in functional neuroimaging studies of aging, characterized by age-related reductions in occipital activity alongside increases in frontal activity. In this work we investigated whether the PASA also manifests in functional brain network measures such as degree, clustering coefficient, path length, and local efficiency. We performed statistical analysis on functional networks derived from an fMRI dataset containing data from healthy young individuals, healthy aged individuals, and aged individuals with very mild to mild Alzheimer's disease (AD). Analysis of both task-based and resting-state functional network properties indicates that the PASA can also be characterized in terms of modulation of functional network properties, and that the onset of AD appears to accentuate this modulation. We also explore the effect of spatial normalization on the results of our analysis.
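
The four measures named here are standard graph statistics; a sketch of how they can be computed with the networkx library (the correlation-matrix input and the 0.2 binarization threshold are illustrative assumptions, not the study's pipeline):

    import numpy as np
    import networkx as nx

    def functional_network_measures(corr, threshold=0.2):
        """Binarize a functional-connectivity (correlation) matrix and return
        the graph measures discussed above."""
        adj = (np.abs(corr) > threshold).astype(int)
        np.fill_diagonal(adj, 0)
        g = nx.from_numpy_array(adj)
        return {
            "mean_degree": np.mean([d for _, d in g.degree()]),
            "clustering": nx.average_clustering(g),
            "path_length": nx.average_shortest_path_length(g),  # assumes a connected graph
            "local_efficiency": nx.local_efficiency(g),
        }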

7.
Neural Comput ; 26(9): 2052-73, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24877736

ABSTRACT

A significant feature of spiking neural networks with varying connection delays, such as those in the brain, is the existence of strongly connected groups of neurons known as polychronous neural groups (PNGs). Polychronous groups are found in large numbers in these networks and are proposed by Izhikevich (2006a) to provide a neural basis for representation and memory. When exposed to a familiar stimulus, spiking neural networks produce consistencies in the spiking output data that are the hallmarks of PNG activation. Previous methods for studying the PNG activation response to stimuli have been limited by the template-based methods used to identify PNG activation. In this letter, we outline a new method that overcomes these difficulties by establishing for the first time a probabilistic interpretation of PNG activation. We then demonstrate the use of this method by investigating the claim that PNGs might provide the foundation of a representational system.


Subject(s)
Action Potentials/physiology , Neural Networks, Computer , Neurons/physiology , Bayes Theorem
8.
Cognition ; 125(2): 288-308, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22863413

ABSTRACT

In this article we present a neural network model of sentence generation. The network has both technical and conceptual innovations. Its main technical novelty is in its semantic representations: the messages which form the input to the network are structured as sequences, so that message elements are delivered to the network one at a time. Rather than learning to linearise a static semantic representation as a sequence of words, our network rehearses a sequence of semantic signals, and learns to generate words from selected signals. Conceptually, the network's use of rehearsed sequences of semantic signals is motivated by work in embodied cognition, which posits that the structure of semantic representations has its origin in the serial structure of sensorimotor processing. The rich sequential structure of the network's semantic inputs also allows it to incorporate certain Chomskyan ideas about innate syntactic knowledge and parameter-setting, as well as a more empiricist account of the acquisition of idiomatic syntactic constructions.


Subject(s)
Language Development , Neural Networks, Computer , Cognition , Humans , Learning , Memory, Short-Term , Semantics , Vocabulary
9.
Neural Netw ; 23(7): 819-35, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20510579

ABSTRACT

This paper presents a new modular and integrative sensory information system inspired by the way the brain performs information processing, in particular, pattern recognition. Spiking neural networks are used to model human-like visual and auditory pathways. This bimodal system is trained to perform the specific task of person authentication. The two unimodal systems are individually tuned and trained to recognize faces and speech signals from spoken utterances, respectively. New learning procedures are designed to operate in an online evolvable and adaptive way. Several ways of modelling sensory integration using spiking neural network architectures are suggested and evaluated in computer experiments.


Subject(s)
Auditory Pathways/physiology , Computer Simulation , Models, Neurological , Nerve Net/physiology , Pattern Recognition, Physiological/physiology , Visual Pathways/physiology , Humans , Neural Networks, Computer , Neurons/physiology
10.
Cogn Neurodyn ; 2(4): 319-34, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19003458

ABSTRACT

The paper introduces a novel computational approach to modeling brain dynamics that integrates dynamic gene-protein regulatory networks with a neural network model. The interaction of genes and proteins in neurons affects the dynamics of the whole neural network. By tuning the gene-protein interaction network and the initial gene/protein expression values, different states of the neural network dynamics can be achieved. A generic computational neurogenetic model implementing this approach is introduced and illustrated with a simple neurogenetic spiking neural network model of local field potential generation. Our approach allows investigation of how deleted or mutated genes can alter the dynamics of a model neural network. We conclude with a proposal for extending this approach to model cognitive neurodynamics.
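
A schematic of the coupling described here (Python; the interaction matrix, sigmoid squashing, and the mapping of one gene to a spike threshold are all illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n_genes = 5
    W = rng.normal(0.0, 0.5, (n_genes, n_genes))  # gene-gene interaction weights
    g = rng.uniform(0.0, 1.0, n_genes)            # initial gene/protein expression

    def grn_step(g, W):
        """One discrete-time update of the gene-protein regulatory network."""
        return 1.0 / (1.0 + np.exp(-W @ g))       # sigmoid-squashed interactions

    for t in range(100):
        g = grn_step(g, W)
        # A neuronal parameter is a function of gene expression, e.g. the
        # spike threshold of every neuron in the network model:
        v_thresh = -55.0 + 10.0 * g[0]            # mV; gene 0 modulates excitability

Deleting or mutating a gene then amounts to clamping one entry of g (or one row of W), after which the change in the network's dynamics can be observed.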

12.
J Neurophysiol ; 98(2): 1048-51, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17537906

ABSTRACT

Heterosynaptic long-term depression (LTD) is conventionally defined as occurring at synapses that are inactive during a time when neighboring synapses are activated by high-frequency stimulation. A new model that combines computational properties of both the Bienenstock, Cooper and Munro model and spike timing-dependent plasticity, however, suggests that such LTD actually may require presynaptic activity in the depressed pathway. We tested experimentally whether presynaptic activity is in fact necessary for previously described heterosynaptic LTD in lateral perforant path synapses in the dentate gyrus of urethane-anesthetized rats. As predicted by the model, procaine infusion into the lateral path fibers, sufficient to transiently block neural activity in this pathway, prevented the induction of LTD in the lateral path following medial path high-frequency stimulation. These data indicate that the previously described heterosynaptic LTD in the dentate gyrus in vivo is actually a form of homosynaptic LTD, requiring presynaptic activity in the depressed pathway.


Subject(s)
Anesthesia , Dentate Gyrus/physiology , Excitatory Postsynaptic Potentials/physiology , Long-Term Synaptic Depression/physiology , Synapses/physiology , Anesthetics/pharmacology , Animals , Computer Simulation , Dose-Response Relationship, Drug , Electric Stimulation , Excitatory Postsynaptic Potentials/drug effects , In Vitro Techniques , Male , Models, Neurological , Perforant Pathway/drug effects , Perforant Pathway/physiology , Procaine/pharmacology , Rats , Rats, Sprague-Dawley , Time Factors
13.
J Comput Neurosci ; 22(2): 129-33, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17053995

ABSTRACT

We have combined the nearest neighbour additive spike-timing-dependent plasticity (STDP) rule with the Bienenstock, Cooper and Munro (BCM) sliding modification threshold in a computational model of heterosynaptic plasticity in the hippocampal dentate gyrus. As a result we can reproduce (1) homosynaptic long-term potentiation of the tetanized input, and (2) heterosynaptic long-term depression of the untetanized input, as observed in real experiments.
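
A compact sketch of the combined rule (Python; the time constants, amplitudes, and square-law threshold update are generic textbook choices, not the paper's fitted values):

    import math

    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # ms, STDP time constants
    A_PLUS = 0.005                    # base potentiation amplitude

    def stdp_bcm_update(w, dt_spike, theta, post_rate, tau_theta=1000.0, dt=1.0):
        """Nearest-neighbour additive STDP whose depression amplitude is scaled
        by a BCM-like sliding threshold theta driven by postsynaptic activity.
        dt_spike = t_post - t_pre for the nearest pre/post spike pair."""
        theta += (dt / tau_theta) * (post_rate ** 2 - theta)  # sliding threshold
        a_minus = A_PLUS * theta          # more recent activity -> stronger LTD
        if dt_spike > 0:                  # pre before post: potentiation
            w += A_PLUS * math.exp(-dt_spike / TAU_PLUS)
        else:                             # post before pre: depression
            w -= a_minus * math.exp(dt_spike / TAU_MINUS)
        return w, theta

Tetanization drives strong postsynaptic firing and raises theta; ongoing activity at the untetanized input then pairs with this raised threshold mostly on the depression side, which is one way such a rule yields the heterosynaptic LTD reported here.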


Subject(s)
Action Potentials/physiology , Dentate Gyrus/cytology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Synapses/physiology , Animals , Time Factors
14.
Neural Netw ; 20(2): 236-44, 2007 Mar.
Article in English | MEDLINE | ID: mdl-16687236

ABSTRACT

Recurrent neural networks are often employed in the cognitive science community to process symbol sequences that represent various natural language structures. The aim is to study possible neural mechanisms of language processing and to aid the development of artificial language processing systems. We used data sets containing recursive linguistic structures and trained the Elman simple recurrent network (SRN) on the next-symbol prediction task. Concentrating on neuron activation clusters in the recurrent layer of the SRN, we investigate the organization of the network state space before and after training. Given an SRN and a training stream, we construct predictive models, called neural prediction machines, that directly employ the state space dynamics of the network. We demonstrate two important properties of representations of recursive symbol series in the SRN. First, the clusters of recurrent activations emerging before training are meaningful and correspond to Markov prediction contexts. We show that prediction states that naturally arise in an SRN initialized with small random weights approximately correspond to states of Variable Memory Length Markov Models (VLMMs) based on individual symbols (i.e. words). Second, we demonstrate that during training the SRN reorganizes its state space according to word categories and their grammatical subcategories, and the next-symbol prediction is again based on the VLMM strategy; after training, however, the prediction is based on word categories and their grammatical subcategories rather than on individual words. Our conclusions hold for small recursion depths, comparable to those observed in human performance. The methods of SRN training and state space analysis introduced in this paper are of a general nature and can be used to investigate the processing of any other symbolic time series by means of an SRN.
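
The neural prediction machine construction can be sketched as vector quantization of recurrent-layer states followed by per-cluster next-symbol statistics (Python; scikit-learn's KMeans and the Laplace smoothing are stand-ins for whatever quantization and estimation the authors used):

    import numpy as np
    from sklearn.cluster import KMeans

    def build_npm(states, next_symbols, n_clusters=50):
        """states: (T, hidden_dim) recurrent activations collected on a stream;
        next_symbols: (T,) integer codes of the symbol following each state.
        Returns a clustering plus per-cluster next-symbol distributions."""
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(states)
        n_symbols = int(next_symbols.max()) + 1
        counts = np.zeros((n_clusters, n_symbols))
        for c, s in zip(km.labels_, next_symbols):
            counts[c, s] += 1
        probs = (counts + 1.0) / (counts + 1.0).sum(axis=1, keepdims=True)
        return km, probs

    # Prediction: p_next = probs[km.predict(state.reshape(1, -1))[0]]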


Subject(s)
Learning/physiology , Linguistics , Nerve Net/physiology , Neural Networks, Computer , Artificial Intelligence , Humans , Language Development , Markov Chains , Models, Neurological , Neurons/physiology
15.
Int J Neural Syst ; 16(3): 215-26, 2006 Jun.
Article in English | MEDLINE | ID: mdl-17044242

ABSTRACT

The paper presents a methodology for using computational neurogenetic modelling (CNGM) to bring new insights into how genes influence the dynamics of brain neural networks. CNGM is a novel computational approach to brain neural network modelling that integrates dynamic gene networks with an artificial neural network (ANN) model. The interaction of genes in neurons affects the dynamics of the whole ANN model through neuronal parameters, which are no longer constant but change as a function of gene expression. Through optimization of the interactions within the internal gene regulatory network (GRN), the initial gene/protein expression values, and the ANN parameters, particular target states of the neural network behaviour can be achieved and statistics about gene interactions can be extracted. In this way, we have obtained an abstract GRN that contains predictions about particular gene interactions in neurons for the subunit genes of AMPA, GABAA and NMDA neuroreceptors. The extent of sequence conservation for 20 subunit proteins of all these receptors was analysed using standard bioinformatics multiple alignment procedures. We observed an abundance of conserved residues, but the most interesting observation was the consistent conservation of phenylalanine (F at position 269) and leucine (L at position 353) in all 20 proteins, with no mutations. We hypothesise that these regions may form the basis for mutual interactions. Existing knowledge of the evolutionary linkage of their protein families, together with analysis at the molecular level, indicates that the expression of these individual subunits should be coordinated, which provides the biological justification for our optimized GRN.


Subject(s)
Computer Simulation , Models, Genetic , Neural Networks, Computer , Neurosciences , Algorithms , Amino Acid Sequence , Mathematics , Molecular Sequence Data , Sequence Alignment
16.
IEEE Trans Neural Netw ; 15(1): 6-15, 2004 Jan.
Article in English | MEDLINE | ID: mdl-15387243

ABSTRACT

In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, while not discarding the continuous state space dynamics of the network, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the "null" base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure.

Index Terms: Complex symbolic sequences, information latching problem, iterative function systems, Markov models, recurrent neural networks (RNNs).


Subject(s)
Markov Chains , Neural Networks, Computer